5,831 research outputs found
TZC: Efficient Inter-Process Communication for Robotics Middleware with Partial Serialization
Inter-process communication (IPC) is one of the core functions of modern
robotics middleware. We propose an efficient IPC technique called TZC (Towards
Zero-Copy). As a core component of TZC, we design a novel algorithm called
partial serialization. Our formulation can generate messages that can be
divided into two parts. During message transmission, one part is transmitted
through a socket and the other part uses shared memory. The part within shared
memory is never copied or serialized during its lifetime. We have integrated
TZC with ROS and ROS2 and find that TZC can be easily combined with current
open-source platforms. By using TZC, the overhead of IPC remains constant when
the message size grows. In particular, when the message size is 4MB (less than
the size of a full HD image), TZC can reduce the overhead of ROS IPC from tens
of milliseconds to hundreds of microseconds and can reduce the overhead of ROS2
IPC from hundreds of milliseconds to less than 1 millisecond. We also
demonstrate the benefits of TZC by integrating it with a TurtleBot2 robot used in
autonomous driving scenarios. We show that by using TZC, the braking distance
can be shortened by 16% compared with the original ROS implementation
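The partial-serialization idea in the abstract, splitting each message into a small control part sent over a socket and a bulk payload that lives in shared memory and is never copied or re-serialized, can be sketched as follows. This is a minimal illustration in Python, not the TZC API; the function names and the JSON control format are our own assumptions.

```python
from multiprocessing import shared_memory
import json

def publish(payload: bytes):
    """Produce a TZC-style two-part message.

    The bulk payload is written once into shared memory; only a tiny
    control record (shared-memory name and size) needs to travel over
    the socket, so IPC cost stays constant as the payload grows.
    """
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload   # the only copy: producing the data
    control = json.dumps({"shm": shm.name, "size": len(payload)}).encode()
    return control, shm                # 'control' is what goes over the socket

def subscribe(control: bytes):
    """Attach to the shared-memory part described by a control record."""
    meta = json.loads(control.decode())
    shm = shared_memory.SharedMemory(name=meta["shm"])
    view = shm.buf[:meta["size"]]      # zero-copy view of the payload
    return view, shm
```

A subscriber receiving the control record attaches to the same shared-memory segment and reads the payload through a memoryview without any further copy or deserialization, which mirrors the constant-overhead behavior reported above.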
An Essay in Comprehending History as the Spiritual Development of Humanity
The article examines the methodological foundations of a conception of history as the spiritual development of humanity. It concludes that monistic approaches to analyzing the history of the formation of the human spirit, both materialist and idealist, are limited. The article then considers the possibilities of a dualistic approach based on the principle of the unity of the spiritual and the material (spirit does not exist outside matter, and matter is meaningless outside spirit), on revealing their interaction in the global contradictions of the epoch, and on resolving those contradictions in the course of a civilized reordering of the world. It also discusses the humanistic aspects of civilization as the highest form of "cultural community" and "mode of existence of human reason in the Universe", and as the disclosure and attainment of freedom in transforming the world. Tracing humanity's advance from cosmogenic to technogenic, and from there to anthropogenic, civilization, the article distinguishes traditional, innovative, and liberal types of spirituality. It concludes that the crisis of the liberal type of spirituality implies the formation of an intellectual-moral type of spirituality as the spirituality of the modern technotronic society
3D indoor scene modeling from RGB-D data: a survey
3D scene modeling has long been a fundamental problem in computer graphics and computer vision. With the popularity of consumer-level RGB-D cameras, there is a growing interest in digitizing real-world indoor 3D scenes. However, modeling indoor 3D scenes remains a challenging problem because of the complex structure of interior objects and poor quality of RGB-D data acquired by consumer-level sensors. Various methods have been proposed to tackle these challenges. In this survey, we provide an overview of recent advances in indoor scene modeling techniques, as well as public datasets and code libraries which can facilitate experiments and evaluation
Static scene illumination estimation from video with applications
We present a system that automatically recovers scene geometry and illumination from a video, providing a basis for various applications. Previous image-based illumination estimation methods require either user interaction or external information in the form of a database. We adopt structure-from-motion and multi-view stereo for initial scene reconstruction, and then estimate an environment map represented by spherical harmonics (as these perform better than other bases). We also demonstrate several video editing applications that exploit the recovered geometry and illumination, including object insertion (e.g., for augmented reality), shadow detection, and video relighting
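A spherical-harmonic environment map of the kind estimated above is just a short vector of coefficients obtained by projecting incoming radiance onto the SH basis. The following is an illustrative degree-1 projection in Python, not the authors' implementation; the function names are our own, and real systems typically go to degree 2 or higher.

```python
import math

def sh_basis(x, y, z):
    """Real spherical-harmonic basis up to degree 1 (4 coefficients)
    evaluated at a unit direction (x, y, z)."""
    return [0.282095,          # Y_0^0  (constant term)
            0.488603 * y,      # Y_1^-1
            0.488603 * z,      # Y_1^0
            0.488603 * x]      # Y_1^1

def project_sh(radiance, n_theta=64, n_phi=128):
    """Project a direction-dependent radiance function onto the SH
    coefficients c_lm = integral of L(w) * Y_lm(w) over the sphere,
    using midpoint quadrature in spherical coordinates."""
    coeffs = [0.0] * 4
    dt = math.pi / n_theta
    dp = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        sin_t = math.sin(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            x = sin_t * math.cos(phi)
            y = sin_t * math.sin(phi)
            z = math.cos(theta)
            w = sin_t * dt * dp        # solid-angle element
            L = radiance(x, y, z)
            for k, Y in enumerate(sh_basis(x, y, z)):
                coeffs[k] += L * Y * w
    return coeffs

def eval_sh(coeffs, x, y, z):
    """Reconstruct approximate radiance in direction (x, y, z)."""
    return sum(c * Y for c, Y in zip(coeffs, sh_basis(x, y, z)))
```

Projecting a constant environment, for example, yields only a nonzero degree-0 coefficient, and evaluating the reconstruction returns the original constant; this compactness is why SH environment maps are attractive for relighting and object insertion.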
S4Net: Single Stage Salient-Instance Segmentation
In this paper, we consider an interesting problem: salient instance
segmentation. Besides producing bounding boxes, our network also outputs
high-quality instance-level segments. Taking into account the
category-independent property of each target, we design a single stage salient
instance segmentation framework, with a novel segmentation branch. Our new
branch regards not only local context inside each detection window but also its
surrounding context, enabling us to distinguish the instances in the same scope
even with obstruction. Our network is end-to-end trainable and runs at a fast
speed (40 fps when processing an image with resolution 320x320). We evaluate
our approach on a publicly available benchmark and show that it outperforms
other alternative solutions. We also provide a thorough analysis of the design
choices to help readers better understand the functions of each part of our
network. The source code can be found at
\url{https://github.com/RuochenFan/S4Net}
- β¦